
Blazing the trails before beating the path: Sample-efficient Monte-Carlo planning

Neural Information Processing Systems

We study the sampling-based planning problem in Markov decision processes (MDPs) that we can access only through a generative model, usually referred to as Monte-Carlo planning. Our objective is to return a good estimate of the optimal value function at any state while minimizing the number of calls to the generative model, i.e. the sample complexity. We propose a new algorithm, TrailBlazer, able to handle MDPs with a finite or an infinite number of transitions from state-action to next states. TrailBlazer is an adaptive algorithm that exploits possible structures of the MDP by exploring only a subset of states reachable by following near-optimal policies. We provide bounds on its sample complexity that depend on a measure of the quantity of near-optimal states. The algorithm behavior can be considered as an extension of Monte-Carlo sampling (for estimating an expectation) to problems that alternate maximization (over actions) and expectation (over next states). Finally, another appealing feature of TrailBlazer is that it is simple to implement and computationally efficient.
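The abstract describes estimating the optimal value function by alternating maximization over actions with empirical expectation over next states sampled from a generative model. The following sketch illustrates that max/expectation alternation with a plain sparse-sampling recursion (in the spirit of Kearns et al.), not TrailBlazer itself; the chain MDP and the function names are illustrative assumptions.

```python
# Illustrative sketch only: a sparse-sampling estimate of V*(s) using a
# generative model, showing the alternation of maximization (over actions)
# and expectation (over sampled next states) that the abstract describes.
# The MDP below is a hypothetical deterministic chain, chosen for brevity.

def generative_model(state, action, n_states=5):
    """Hypothetical generative model: returns (next_state, reward).
    Action 1 moves right, action 0 moves left; reward 1 on reaching the end."""
    step = 1 if action == 1 else -1
    next_state = min(n_states - 1, max(0, state + step))
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

def estimate_value(state, depth, width, gamma=0.9, actions=(0, 1)):
    """Estimate V*(state) to horizon `depth`, drawing `width` samples of the
    next state per action. Each call to generative_model counts toward the
    sample complexity the paper seeks to minimize."""
    if depth == 0:
        return 0.0
    best = float("-inf")
    for a in actions:                       # maximization over actions
        total = 0.0
        for _ in range(width):              # empirical expectation over next states
            s2, r = generative_model(state, a)
            total += r + gamma * estimate_value(s2, depth - 1, width)
        best = max(best, total / width)
    return best

v0 = estimate_value(0, depth=4, width=3)
# In this deterministic chain, reaching the goal from state 0 takes 4 steps,
# so the estimate is gamma**3 = 0.729.
```

A fixed sampling width explores all actions uniformly; the adaptivity claimed for TrailBlazer lies precisely in concentrating samples on near-optimal regions instead.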


Reviews: Blazing the trails before beating the path: Sample-efficient Monte-Carlo planning

Neural Information Processing Systems

The paper considers an interesting and important problem. The results can be interpreted as a natural combination of the planning algorithm of Busoniu and Munos (2012) with the sampling method of Kearns et al. (1999). However, the paper introduces a few more tricks to make this idea work (e.g., balancing confidence intervals and uncertainties at different parts of the planning tree). The presentation is quite nice and the authors try to convey the intuition behind the choices made in designing the algorithm. The clarity could be improved by noting that the MAX part of the algorithm is in fact action elimination for best-arm identification (could some of the existing results be reused instead of reproving everything from scratch?).
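The review's remark that the MAX part amounts to action elimination for best-arm identification can be made concrete with a standard successive-elimination sketch. This is the generic bandit procedure the reviewer alludes to, not code from the paper; the Bernoulli arms and the confidence radius below are illustrative assumptions.

```python
import math
import random

def pull(arm_mean):
    """Hypothetical Bernoulli arm: returns a 0/1 reward with the given mean."""
    return 1.0 if random.random() < arm_mean else 0.0

def successive_elimination(means, delta=0.05, max_rounds=2000):
    """Action elimination for best-arm identification: sample every surviving
    arm each round and drop any arm whose upper confidence bound falls below
    the best lower confidence bound among survivors."""
    k = len(means)
    active = list(range(k))
    sums = [0.0] * k
    counts = [0] * k
    for t in range(1, max_rounds + 1):
        for a in active:
            sums[a] += pull(means[a])
            counts[a] += 1
        # Hoeffding-style radius with a union bound over arms and rounds.
        radius = math.sqrt(math.log(4 * k * t * t / delta) / (2 * t))
        best_lcb = max(sums[a] / counts[a] - radius for a in active)
        active = [a for a in active
                  if sums[a] / counts[a] + radius >= best_lcb]
        if len(active) == 1:
            break
    return active

random.seed(0)
survivors = successive_elimination([0.2, 0.5, 0.9])
# With high probability only the best arm (index 2) survives elimination.
```

At a MAX node of a planning tree, the "arms" are the actions and each pull is a sampled return estimate, which is the correspondence the reviewer suggests could let existing best-arm-identification results be reused.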


Blazing the trails before beating the path: Sample-efficient Monte-Carlo planning

Grill, Jean-Bastien, Valko, Michal, Munos, Remi

Neural Information Processing Systems
